Starting with the 2015-2016 year, Canadian engineering institutions are required by the Canadian Engineering Accreditation Board (CEAB) to meet new outcomes-based and data-informed continuous improvement accreditation criteria. Institutions have been working diligently to plan, develop and refine their assessment efforts, yet there has been little information on how institutions were developing their respective approaches to meet the requirements, and even less on the specific aspects, viewpoints, goals and barriers common to all. To this end, the first EGAD National Snapshot Survey was created.
The EGAD national snapshot survey was distributed to all accredited engineering institutions across Canada, specifically to those leading the accreditation efforts. Where this individual was unknown, the survey was sent to the Dean of Engineering with a request to forward it to the appropriate party. Preliminary results from the survey were presented at a special session of the Canadian Engineering Education Association to help facilitate a discussion about accreditation questions, challenges and issues. The presentation is available on the EGAD Project website [@citation].
It has been two years since the initial survey, and EGAD members felt there had been a change in the overall community with respect to accreditation. Conversations were becoming deeper and more nuanced, and programs appeared to be transitioning away from the procedural aspects of outcomes and indicators towards the use of assessment data, continuous improvement, and data and change management. To capture and chronicle this shift, the EGAD Snapshot Survey was run a second time. The survey was minimally changed, adding only a small set of questions addressing the cost of supporting graduate attribute assessment and continuous improvement.
This paper presents the results from the second administration of the survey, contrasted with the initial administration.
The EGAD national snapshot survey drew inspiration and select prompts from the National Institute for Learning Outcomes Assessment (NILOA) survey of provosts (or designates) of American universities and colleges, which investigated current assessment activities and how institutions were using evidence of student learning outcomes [@citation]. Items were modified to fit the context of Canadian engineering, and additional items addressing specific elements and themes were constructed by the EGAD Project Group.
In total there were 24 responses to the first administration of the survey and 17 to the second. This provides a representative sample of 30 distinct institutions out of a possible 43 Canadian institutions with accredited engineering programs, illustrated on the map below.
Figure 1: Institutions Represented in the EGAD National Snapshot Survey
The survey was sent to those thought to be in charge of their institution's process; if that person was unknown, it was sent to the Dean. The roles of those who responded to the survey are summarized in Table 1 below.
Table 1: Roles of Survey Respondents

| Year | Associate Dean | Director | Specialized Position |
|---|---|---|---|
| 2013-2014 | 17 | 3 | 5 |
| 2015-2016 | 10 | 3 | 4 |
The relative sizes of the responding programs are illustrated below in Table 2. There is a slight bias in the second year of the survey towards medium and larger programs. A similar trend is noticeable when classifying institutions by the number of programs they offer, illustrated in Table 3 below.
Table 2: Relative Size of Responding Programs

| Year | Small (<1000) | Medium (1000-3000) | Large (3000+) |
|---|---|---|---|
| 2013-2014 | 7 | 10 | 6 |
| 2015-2016 | 3 | 6 | 8 |
Table 3: Number of Engineering Programs Offered by Responding Institutions

| Year | 1-4 Programs | 5-10 Programs | 11+ Programs |
|---|---|---|---|
| 2013-2014 | 6 | 15 | 3 |
| 2015-2016 | 3 | 6 | 4 |
The first administration of the survey was delivered before programs were evaluated on the graduate attributes, and over 50% of respondents had an accreditation visit looming within the next two years. Similar characteristics were present in the second administration, along with a greater number of programs with five or more years until their next visit, illustrated in Table 4 below.
Table 4: Time Until Next Accreditation Visit

| Year | Within a Year | Two Years | Three Years | Four Years | Five Years or More |
|---|---|---|---|---|---|
| 2013-2014 | 9 | 4 | 6 | 4 | 2 |
| 2015-2016 | 5 | 4 | 3 | 0 | 5 |
Overall, these questions provide a means of determining how representative the data is. Taking into account the numerical and geospatial distributions in the responses, the data can be considered representative at the national scale.
The goal of this section is to highlight common aspects and differences in process management, points of view, approach, progress, collaboration and support for accreditation.
Nearly half of the institutions across both administrations report that an Associate Dean or equivalent position is directly responsible for their institutional approach. Interestingly, individual faculty members and committee-style approaches are on the rise. The greatest shift between administrations has been the rise of specialized positions to help manage accreditation processes, from 8.8% of respondents to 20%. A number of institutions have reported full- and part-time staff positions (Accreditation or Assessment Coordinators), instructors being provided teaching relief, and specialized lecturer positions blending teaching responsibilities with supporting their institutional process. This potentially reflects institutions realizing that these processes are time-intensive, and that to ensure a sustainable workload they should be supported by full-time or distributed part-time positions.
Figure 2: Who is the individual most directly responsible for helping develop, coordinate, and report your institution's approach?
Collectively, across both administrations of the survey, respondents have viewed accreditation in a positive light (Figure 3). Most responses in the “other” category highlighted a combination of program quality and cultural shifts, but also revealed that significant improvements could be made to the process.
Figure 3: How do you view the outcomes-based accreditation process?
The approaches utilized by institutions saw minor changes. More institutions reported adopting a single approach for the entire institution (an increase of 9.4%), with department-led approaches falling out of favour. The majority of institutions still report a single approach with sufficient latitude at the department level, shown below in Figure 4.
Figure 4: How would your institutional approach be characterized?
In the first administration of the survey, programs had placed a great deal of emphasis on completing the early phases of an outcomes-based continuous improvement process, and had not reached the point of analyzing data, improving curriculum and closing the loop. The most recent data from the survey illustrates significant headway in these areas (Figure 5). Programs are sifting through the assessment data and engaging in meaning-making activities. Curriculum and program improvement has increased, with 25% more programs reporting activity in this area. Closing the loop is still an area needing much focus, as programs are still trying to sustainably develop the processes and governance that would truly integrate these processes into program culture. Other responses included engaging external stakeholders and curriculum review and renewal processes, which could be classified across multiple categories.
Figure 5: Which activities for outcomes-based curriculum improvement have you completed or already have in place?
Engineers are by nature a collaborative group, and levels of collaboration showed next to no change between survey administrations. The interesting result from this set of questions was the focus of the collaborative activities, which echoes the changes illustrated in the previous sections (Figure 6). More engineering programs are collaborating on assessment and data collection, analysis and interpretation of data, curriculum and program improvement, and identifying people to involve in the process. This highlights the focus on the applied stages of the continuous improvement process and the maturation of institutions in their accreditation efforts. The “other” category also saw considerable growth, with programs collaborating broadly with technical and data experts and with other accredited programs outside of engineering (Business, Veterinary Science, Nursing, etc.). The focus largely centered on gaining perspective and insight and capitalizing on shared experiences.
Figure 6: Throughout your approach have you collaborated with other colleagues outside of your program/faculty, with whom and focusing on what?
There was a great deal of change regarding collaboration with external stakeholders (Figure 7). There was much greater involvement with nearly all external stakeholders, with the exception of government. This is encouraging to see, as programs wishing to develop a more comprehensive and inclusive assessment process should involve as many stakeholders as possible. The information gleaned from these external viewpoints can provide an unbiased perspective, offering insight and, in time, helping contextualize the impact of potential program improvements.
Figure 7: In developing your approach, did you collaborate with other stakeholders, and with whom?
Support structures and resources are critical to continuous improvement processes. There has been a shift in the perceived importance of these structures from the first administration to the second, which may be attributed to the knowledge and experience gained by programs over time and to feedback from CEAB visits. Regarding the specific structures in Figure 8 below, we can see the increased importance of assessment committees, assessment management systems and software, funds targeted for outcomes assessment, professional staff dedicated to assessment, and student participation in assessment activities.
Figure 8: To what extent do the following structures, resources and features support your process?
In terms of what would be most helpful overall, there wasn't much change between administrations, with a few notable exceptions (Figure 9). More institutions replied that increased professional development, more valid and reliable assessment measures, more opportunities for collaboration, and greater sharing of and access to assessment results across programs are of greater importance. These small but noticeable shifts are indicative of the collective movement of programs towards effective practices in continuous program improvement and closing the loop: more knowledge about effective practice, sharing knowledge and expertise, and greater transparency in the use of assessment results [@citation for NILOA and Approaching the Loop].
Figure 9: What would be most helpful for your institution's approach to outcomes-based assessment?
The use and role of technology to support continuous program improvement can be a confusing and sometimes contentious area, in which engineers happily tread. There were no notable shifts between survey administrations, at least between the technology groups that comprised the question (Figure 10). Given the state of current educational technology, a single turnkey solution does not exist; instead, institutions are having to develop their own ad-hoc approaches and leverage multiple systems to meet their needs. Engineers like a good problem, and it is unsurprising that the ad-hoc approach is most popular. This gives institutions the flexibility to develop systems suited to the unique needs and scale of their programs. Despite the considerable variance introduced by institutional and program differences, common elements and aspects do exist.
Most ad-hoc or homegrown systems utilize five key components: 1) a learning management system, 2) a data warehouse, 3) a templating system, 4) an analysis and reporting system, and 5) process management. The learning management system is the easy piece, with the majority of programs using Brightspace, Blackboard or Moodle. The data warehouse tends to be a database (MySQL, Access, MSSQL) or a cloud-based system (SharePoint, ownCloud, etc.) used for secure storage of and access to data. The templating system, largely used for data collection prior to analysis and storage, tends to be a spreadsheet-based approach (Excel or a web application). The fourth component, analysis and reporting, is used for statistical analysis, dissemination and communication of the results of outcomes assessment and program improvement to administration, programs, students and instructors; this part is underrepresented in most responses, but common approaches include Excel, web-based applications and commercial vendors. Process and project management is also underrepresented, with the majority of approaches choosing to manage this via committee and documenting it in an analog manner; alternative approaches include Vena Solutions and other issue-tracking systems. A minimal sketch of how these components can fit together appears after Figure 10 below.
Figure 10: Which approach is being adopted in your engineering programs for the different technologies?
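To make the component list above concrete, the sketch below strings together the templating, warehousing, and reporting pieces in R (the language used elsewhere in this project). The table, column names, and the SQLite backend are illustrative assumptions, not a description of any particular institution's system.

```r
library(DBI)      # generic database interface
library(RSQLite)  # lightweight stand-in for MySQL/MSSQL/Access

# 1) Templating system: in practice this data is read from a course
#    spreadsheet, e.g. readxl::read_excel("ENGG-1000-attributes.xlsx");
#    a small in-memory stand-in is used here so the sketch runs as-is.
scores <- data.frame(
  student_id = c(101, 102, 103),
  indicator  = c("3.1-Knowledge", "3.1-Knowledge", "3.2-Analysis"),
  score      = c(3, 4, 2))

# 2) Data warehouse: append the term's data to a central table.
con <- dbConnect(SQLite(), "attribute-warehouse.sqlite")
dbWriteTable(con, "assessments", scores, append = TRUE)

# 3) Analysis and reporting: a cohort-level summary by indicator.
report <- dbGetQuery(con,
  "SELECT indicator, AVG(score) AS mean_score, COUNT(*) AS n_students
   FROM assessments GROUP BY indicator")
print(report)
dbDisconnect(con)
```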
As programs learn to navigate the assessment and continuous improvement process, their comfort and satisfaction with tools has changed. At first glance, Figure 11 appears to indicate that more respondents are happier with the tools now than they were two years ago; upon closer inspection, there has been a shift from “unsatisfied” to “neutral”. A possible explanation is programs' relative unfamiliarity with tools and systems to support this process, or simply the task of trying to make a legacy system perform to 'off-label' specifications. This can lead to fatigue and dissatisfaction, leaving respondents more wary of current tools that may have been selected as a means to an end rather than as an integral piece (hence neutral rather than unsatisfied).
Figure 11: To the best of your knowledge, how satisfied are your programs with the tool(s)?
Trends in the frequency of data collection have also shifted (Figure 12). More programs have reported moving to a continuous model, collecting assessment data every semester or term, accompanied by a reduction in those using a yearly collection schedule. Responses in the “other” category in both years reveal the use of rolling assessment or targeted collection of specific attributes in different years, with little difference between the survey administrations.
Figure 12: In your engineering programs, how often do you plan to collect data?
Following on the use of sampling and rolling assessment in the previous question, institutions have reported an increase in the use of sampling (Figure 13). One explanation for this shift is an attempt to reduce the assessment workload for instructors to facilitate buy-in to the approach. Another may be differences in assessment tools: certain tools (e.g. program-level rubrics) work best with representative sampling, particularly if the institutional approach focuses on cohort-level analysis. Institutions have also reported the use of sampling as a stopgap measure, employing it when necessary to meet specific goals. A small sketch of representative cohort sampling follows Figure 13 below.
Figure 13: In planning to collect outcomes assessment data, are you sampling students in a cohort or do you plan to assess the entire cohort?
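As an illustration of the cohort-sampling idea discussed above, the sketch below draws a stratified random sample in R; the cohort size, program names and 25% sampling fraction are made-up assumptions for demonstration only.

```r
# Hypothetical cohort: 400 students across four programs.
set.seed(2016)  # make the sample reproducible
cohort <- data.frame(
  student_id = 1:400,
  program    = rep(c("Civil", "Mechanical", "Electrical", "Software"),
                   each = 100))

# Stratify by program so each program is proportionally represented,
# then sample 25% of each stratum for program-level rubric scoring.
sampled <- do.call(rbind, lapply(split(cohort, cohort$program), function(p)
  p[sample(nrow(p), ceiling(0.25 * nrow(p))), ]))

nrow(sampled)  # 100 students to assess instead of 400
```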
Unsurprisingly, all institutions across both survey administrations are assessing certain indicators using group-based artifacts (Figure 14). Nearly all capstone experiences involve team-based activities, so this result was more or less expected. A potential extension of this question is how programs distinguish individual performance within group activities, as the outcomes from a group submission may not necessarily reflect an individual's level of competence.
Figure 14: In your engineering programs, are you planning to assess certain indicators using group-based artifacts (e.g. capstone project reports)?
Most of the methods used in outcomes-based assessment have seen growth between survey administrations (Figure 15). Speaking specifically to the most notable shifts, institutions are making greater use of direct observation, open-ended problems, logbooks, co-op or internship programs and portfolios. This represents the continued migration towards obtaining high-quality, authentic assessment data on student performance. Additionally, institutions are realizing that the more diverse the types of data collected, the more reliable and comprehensive the insights that can be drawn from them.
Figure 15: To your knowledge, what tools, methods or experiences are used for outcomes-based assessment within your faculty?
More programs are engaging in data-informed improvement, illustrated in the shift between survey administrations in Figure 16, specifically the reduction of the “Not at All” category in the second administration compared to the first. There is an exception, with curricular requirements and course changes experiencing a slight decline, but with an increase in the higher-intensity change categories. This is to be expected, with more programs having a continuous improvement process that is reaching maturity. These changes are occurring mostly at the department, faculty and course levels respectively. The change efforts are, for the most part, limited to engineering, with little change happening at the institutional level, although the results of the second administration show some headway in influencing processes and policies outside of the Faculty.
Figure 16: To what extent have you made changes in policies, programs or practices based on outcomes-based assessment results for each of the following?
In contrast to the results of the previous section, programs may have made changes based on outcomes-based assessment data, but they have little evidence of the impact of those changes on student learning. This issue has not changed much between administrations of the survey, with 80% of respondents reporting no evidence that the graduate attributes process has increased student learning (Figure 17). In the few programs that reported having evidence, most of it was indicated by increased quality of learning (an increase in student performance) and resultant curriculum changes. These results indicate a clear opportunity for programs to focus future development: collecting evidence that the process is working to improve student learning is critical in determining both the effectiveness and the quality of a continuous improvement process.
Figure 17: Evidence of impact of outcomes-based continuous improvement, and the nature of that evidence.
Institutions are progressing towards greater transparency with stakeholders, with notable shifts occurring between survey administrations (Figure 18). Programs are working towards sharing more learning outcomes, the impact of the use of assessment data, evidence of student learning, assessment resources, and how they use data to improve. Other categories experienced a slight decline, which may be explained by institutions focusing on demonstrating the results of the process rather than the underlying plans. While this is commendable, institutions should keep articulating the plans and vision that underlie the assessment and the results, to continue providing background and context to stakeholders who may not be familiar with recent changes in programs and the curriculum.
Figure 18: In your programs, do you plan to share the following items with stakeholders?
These next questions were new additions to the second administration of the survey, included to investigate the cost of supporting continuous improvement processes, particularly with respect to personnel. The chart presented below in Figure 19 is a treemap, a method for visualizing hierarchical data. In this treemap, the colours represent the different categories of personnel, each labelled at the top left, with area proportional to the overall number of responses in that category. The sub-categories are the respective FTE equivalents for each position, labelled in the centre, with the area of each proportional to the number of survey responses indicated at the bottom right (the bigger the rectangle, the greater the emphasis on that role). Programs are devoting the most personnel resources to administrative support, with a large number of full-time or greater positions. This is followed by faculty release, which typically serves the same purpose by temporarily placing a faculty member in an administrative role. Computer programmers, who help develop the ad-hoc custom systems and software, are the next highest, typically utilized as partial FTEs. Curriculum specialists and data analysts round out the categories, with more full-time positions in the specialist role than the data analyst role. These are interesting results, directly representative of the significant workload resulting from the accreditation mandate. It would also be interesting to chronicle the amount of effort required during an accreditation year to see what 'overload capacity' may look like.
It should be noted that the “Other” category was for the most part used as a descriptor for non-conventional roles. The 'other' responses ranged from institutions noting planned increases in faculty release, to the establishment of 6 specialized roles and 4 support positions, to responsibility spread across multiple faculty members.
Figure 19: What approximate additional annual costs does your program devote to supporting the graduate attribute processes in your programs?
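For readers unfamiliar with the format, a treemap like Figure 19 can be produced in R with the treemap package, as sketched below; the personnel counts in this example are invented placeholders, not the survey data.

```r
# Sketch of a Figure-19-style treemap; the counts are hypothetical.
library(treemap)

personnel <- data.frame(
  category = c("Administrative support", "Administrative support",
               "Faculty release", "Programmer",
               "Curriculum specialist", "Data analyst"),
  fte      = c("1.0 FTE", "0.5 FTE", "0.5 FTE",
               "0.5 FTE", "1.0 FTE", "0.5 FTE"),
  n        = c(6, 3, 4, 3, 2, 1))  # placeholder response counts

treemap(personnel,
        index = c("category", "fte"),  # colour by category, nest by FTE
        vSize = "n",                   # rectangle area ~ number of responses
        title = "Personnel supporting graduate attribute processes")
```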
The second new question asked whether there were any significant one-time or repeating annual costs devoted to supporting the graduate attribute assessment process. Most programs reported that there were none (13 out of 17), but the remaining 4 institutions indicated expenditures on software licensing, professional development for faculty and staff, and educational consultants to help review programs for impending visits.
The final new question asked institutions whether they see the costs required to support graduate attribute processes growing over the next five years. 82% of institutions replied that they expect support costs to increase a moderate amount over that period (Figure 20). This is an important issue, as these costs scale with the size of the program, and disregarding them can seriously imperil the sustainability of a continuous improvement process. Collectively, these questions uncover financial sustainability issues and illustrate that careful planning is needed for long-term implementation.
Figure 20: Over the next five years, to what extent do you expect the costs to support the graduate attribute processes to continue to grow?
There were also seven additional open-response questions. Due to the complexity of analyzing these, they have been excluded from the results of this paper; preliminary analysis of these items will be included as part of the conference presentation.
The contrasted results from both surveys illustrate some interesting shifts in practice and viewpoints regarding outcomes-based assessment, continuous improvement and graduate attributes accreditation. Collectively, these results illustrate a gradual change towards embracing an external mandate for a truly useful purpose. The responses illustrate a community that is embracing the opportunity to facilitate a cultural shift within the engineering discipline: moving away from the traditional, outdated focus on education by transmission of knowledge, opaque and confusing assessment practices and sparse curriculum renewal, and replacing it with a system drawn from effective educational practice that focuses on knowledge, skill and professional conduct, an open and transparent assessment approach, and data-informed curriculum and program improvement.
This gradual movement towards effective practice is refreshing to see in a discipline that may not always have the best reputation for embracing new developments in educational research and evaluation. These shifts illustrate a growing awareness among engineering institutions of effective assessment and the collection and analysis of educational data, alongside a rapidly growing interest and expertise in educational technology. The results show that Canadian engineering institutions are truly investing in bettering their programs and the engineering curriculum to improve student learning. There are also signs of institutions considering the long-term aspects of accreditation, employing the characteristic systems-level thinking of engineering to consider elements of sustainability, impact, effectiveness and resources.
This analysis is only preliminary, as there is considerable opportunity to explore these results further and use the demographics to conduct a cross-tabular analysis by program characteristics and enrolment numbers. It is the intention of the EGAD project to release the full analysis of both survey administrations in an upcoming white paper which will be freely available on the EGAD website.
In the interest of conducting open and collaborative research, nearly all collection, analysis, writing and report generation was done using open-source or readily available technologies, including FluidSurveys, R, RStudio and GitHub. This report was written in R Markdown in RStudio, typeset using the knitr package and pandoc, and hosted on GitHub. You can view this report, the code that created it and the associated analysis at https://github.com/EGADProject/EGAD-Snapshot-Survey.
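For readers interested in reproducing this workflow, the two-step knit-then-typeset pattern described above looks roughly like the following; the file name "report.Rmd" is a placeholder for the actual source in the repository.

```r
# Minimal sketch of the knitr + pandoc toolchain used for this report.
library(knitr)

knit("report.Rmd")                        # run R chunks -> report.md
system("pandoc report.md -o report.pdf")  # typeset the markdown with pandoc
```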
The EGAD Project would like to acknowledge the funding and support provided by the National Council of Deans of Engineering and Applied Science (NCDEAS) and Engineers Canada.